61 research outputs found

    Fundus image analysis for automatic screening of ophthalmic pathologies

    In recent years, the number of blindness cases has been significantly reduced.
Despite this promising news, the World Health Organisation estimates that 80% of visual impairment (285 million cases in 2010) could be avoided if diagnosed and treated early. To accomplish this purpose, eye care services need to be established in primary health care, and screening campaigns should be a common task in centres with people at risk. However, these solutions entail a high workload for trained experts in the analysis of the anomalous patterns of each eye disease. Therefore, the development of algorithms for automatic screening systems plays a vital role in this field. This thesis focuses on the automatic identification of the retinal damage provoked by two of the most common pathologies in today's society: diabetic retinopathy (DR) and age-related macular degeneration (AMD). Specifically, the final goal of this work is to develop novel methods, based on fundus image description and classification, to characterise the healthy and abnormal tissue in the retina background. In addition, pre-processing algorithms are proposed with the aim of normalising the high variability of fundus images and removing the contribution of some retinal structures that could hinder retinal damage detection. In contrast to most state-of-the-art works on damage detection using fundus images, the methods proposed throughout this manuscript avoid the need for lesion segmentation or candidate map generation before the classification stage. Local binary patterns, granulometric profiles and fractal dimension are locally computed to extract texture, morphological and roughness information from retinal images. Different combinations of this information feed advanced classification algorithms formulated to optimally discriminate exudates, microaneurysms, haemorrhages and healthy tissues.
Through several experiments, the ability of the proposed system to identify DR and AMD signs is validated using different public databases with a large degree of variability and without image exclusion. Moreover, this thesis covers the basics of the deep learning paradigm. In particular, a novel approach based on convolutional neural networks (CNNs) is explored. The transfer learning technique is applied to fine-tune the most important state-of-the-art CNN architectures. Exudate detection and localisation tasks using neural networks are carried out in the last two experiments of this thesis. An objective comparison between the hand-crafted feature extraction and classification process and the prediction models based on CNNs is established. The promising results of this PhD thesis and the affordable cost and portability of retinal cameras could facilitate the further incorporation of the developed algorithms in a computer-aided diagnosis (CAD) system to help specialists in the accurate detection of anomalous patterns characteristic of the two diseases under study: DR and AMD.
Colomer Granero, A. (2018). Fundus image analysis for automatic screening of ophthalmic pathologies [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/99745
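The transfer learning strategy mentioned in the abstract keeps a pretrained CNN mostly fixed and re-trains only the final layer(s). Stripped to its core, that amounts to fitting a new classification head on frozen features. The NumPy sketch below illustrates only this pattern with a toy fixed projection standing in for the pretrained backbone; all names and data here are illustrative, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_features(x):
    """Stand-in for a pretrained CNN backbone: a fixed (untrained) projection."""
    W = np.linspace(-1.0, 1.0, x.shape[1] * 4).reshape(x.shape[1], 4)
    return np.tanh(x @ W)

def train_head(feats, labels, lr=0.5, steps=200):
    """'Fine-tune' only the head: logistic regression fitted by gradient descent."""
    w, b = np.zeros(feats.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))   # sigmoid output
        grad = p - labels                            # dLoss/dlogit for cross-entropy
        w -= lr * feats.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

# toy two-class data standing in for "healthy" vs "pathological" patches
x = np.vstack([rng.normal(-2, 0.5, (50, 3)), rng.normal(2, 0.5, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
f = frozen_features(x)        # backbone stays frozen
w, b = train_head(f, y)       # only the new head is trained
acc = (((f @ w + b) > 0).astype(int) == y).mean()
```

In practice the same idea is applied with a real pretrained architecture, where the convolutional layers play the role of `frozen_features`.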

    Detection of Early Signs of Diabetic Retinopathy Based on Textural and Morphological Information in Fundus Images

    [EN] The number of blind people in the world is estimated to exceed 40 million by 2025. It is therefore necessary to develop novel algorithms, based on fundus image descriptors, that allow the automatic classification of retinal tissue into healthy and pathological at early stages. In this paper, we focus on one of the most common pathologies in today's society: diabetic retinopathy. The proposed method avoids the need for lesion segmentation or candidate map generation before the classification stage. Local binary patterns and granulometric profiles are locally computed to extract texture and morphological information from retinal images. Different combinations of this information feed classification algorithms to optimally discriminate bright and dark lesions from healthy tissues. Through several experiments, the ability of the proposed system to identify diabetic retinopathy signs is validated using different public databases with a large degree of variability and without image exclusion. This work has been partially supported by the Spanish Ministry of Economy and Competitiveness through project DPI2016-77869 and GVA through project PROMETEO/2019/109. Colomer, A.; Igual García, J.; Naranjo Ornedo, V. (2020). Detection of Early Signs of Diabetic Retinopathy Based on Textural and Morphological Information in Fundus Images. Sensors. 20(4):1-20. https://doi.org/10.3390/s20041005
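The two descriptors the paper above computes locally, local binary patterns and granulometric profiles, can be sketched with plain NumPy/SciPy. This is a minimal illustration of the idea (basic 8-neighbour LBP without interpolation, and a granulometry built from greyscale openings of increasing size), not the paper's implementation; the function names are ours.

```python
import numpy as np
from scipy.ndimage import grey_opening

def lbp8(img):
    """Basic 8-neighbour local binary pattern (radius 1, no interpolation):
    each interior pixel gets a byte encoding which neighbours are >= it."""
    c = img[1:-1, 1:-1]
    code = np.zeros(c.shape, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(img, bins=256):
    """Normalised histogram of LBP codes: the texture feature vector."""
    h, _ = np.histogram(lbp8(img), bins=bins, range=(0, bins))
    return h / h.sum()

def granulometric_profile(img, sizes=(1, 2, 3, 4)):
    """Image volume removed by openings of increasing size: a coarse
    morphological (size/shape) signature of the bright structures."""
    total = img.sum()
    return np.array([total - grey_opening(img, size=2 * s + 1).sum() for s in sizes])

flat = np.full((8, 8), 5.0)
codes = lbp8(flat)                       # uniform region: every neighbour >= centre
hist = lbp_histogram(flat)
spot = np.zeros((9, 9))
spot[4, 4] = 10.0
profile = granulometric_profile(spot)    # the single bright pixel vanishes at every scale
```

Concatenating histograms and profiles computed over local windows yields the kind of feature vectors the paper feeds to its classifiers.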

    Retinal Disease Screening through Local Binary Patterns

    © 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. This work investigates the discrimination capabilities of the texture of fundus images to differentiate between pathological and healthy images. For this purpose, the performance of Local Binary Patterns (LBP) as a texture descriptor for retinal images has been explored and compared with other descriptors such as LBP filtering (LBPF) and local phase quantization (LPQ). The goal is to distinguish between diabetic retinopathy (DR), age-related macular degeneration (AMD) and normal fundus images by analysing the texture of the retina background and avoiding a previous lesion segmentation stage. Five experiments (separating DR from normal, AMD from normal, pathological from normal, DR from AMD, and the three different classes) were designed and validated with the proposed procedure, obtaining promising results. For each experiment, several classifiers were tested. An average sensitivity and specificity higher than 0.86 in all cases, and of almost 1 and 0.99, respectively, for AMD detection, were achieved. These results suggest that the method presented in this paper is a robust algorithm for describing retina texture and can be useful in a diagnosis aid system for retinal disease screening. This work was supported by the NILS Science and Sustainability Programme (010-ABEL-IM-2013) and by the Ministerio de Economia y Competitividad of Spain, Project ACRIMA (TIN2013-46751-R). The work of A. Colomer was supported by the Spanish Government under FPI Grant BES-2014-067889. Morales, S.; Engan, K.; Naranjo Ornedo, V.; Colomer, A. (2015). Retinal Disease Screening through Local Binary Patterns. IEEE Journal of Biomedical and Health Informatics. (99):1-8. https://doi.org/10.1109/JBHI.2015.2490798
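The sensitivity and specificity figures quoted above come from the standard confusion-matrix definitions. For reference, with hypothetical counts (the numbers below are illustrative, not taken from the paper):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN), fraction of diseased eyes caught;
    specificity = TN/(TN+FP), fraction of healthy eyes correctly cleared."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical screening outcome: 90/100 diseased detected, 99/100 healthy cleared
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=99, fp=1)
```

A screening system aims to keep sensitivity high (few missed cases) without sacrificing specificity (few false alarms).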

    Glaucoma Detection from Raw SD-OCT Volumes: a Novel Approach Focused on Spatial Dependencies

    [EN] Background and objective: Glaucoma is the leading cause of blindness worldwide. Many studies based on fundus image and optical coherence tomography (OCT) imaging have been developed in the literature to help ophthalmologists through artificial-intelligence techniques. Currently, 3D spectral-domain optical coherence tomography (SD-OCT) samples have become more important since they could contain promising information for glaucoma detection. To analyse the hidden knowledge of the 3D scans for glaucoma detection, we have proposed, for the first time, a deep-learning methodology based on leveraging the spatial dependencies of the features extracted from the B-scans. Methods: The experiments were performed on a database composed of 176 healthy and 144 glaucomatous SD-OCT volumes centred on the optic nerve head (ONH). The proposed methodology consists of two well-differentiated training stages: a slide-level feature extractor and a volume-based predictive model. The slide-level discriminator is characterised by two new convolutional modules, residual and attention, which are combined via skip-connections with other fine-tuned architectures. Regarding the second stage, we first carried out data-volume conditioning before extracting the features from the slides of the SD-OCT volumes. Then, Long Short-Term Memory (LSTM) networks were used to combine the recurrent dependencies embedded in the latent space to provide a holistic feature vector, which was generated by the proposed sequential-weighting module (SWM). Results: The feature extractor reports AUC values higher than 0.93 in both the primary and external test sets. Moreover, the proposed end-to-end system based on a combination of CNN and LSTM networks achieves an AUC of 0.8847 in the prediction stage, which outperforms other state-of-the-art approaches intended for glaucoma detection.
Additionally, Class Activation Maps (CAMs) were computed to highlight the most relevant regions per B-scan when discerning between healthy and glaucomatous eyes from raw SD-OCT volumes. Conclusions: The proposed model is able to extract the features from the B-scans of the volumes and combine the information of the latent space to perform a volume-level glaucoma prediction. Our model, which combines residual and attention blocks with a sequential-weighting module to refine the LSTM outputs, surpasses the results achieved by current state-of-the-art methods focused on 3D deep-learning architectures. The authors gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used here. This work has been funded by the GALAHAD project [H2020-ICT-2016-2017, 732613], the SICAP project (DPI2016-77869-C2-1-R) and GVA through project PROMETEO/2019/109. The work of Gabriel García has been supported by the State Research Spanish Agency, PTA2017-14610-I. García-Pardo, JG.; Colomer, A.; Naranjo Ornedo, V. (2021). Glaucoma Detection from Raw SD-OCT Volumes: a Novel Approach Focused on Spatial Dependencies. Computer Methods and Programs in Biomedicine. 200:1-16. https://doi.org/10.1016/j.cmpb.2020.105855

    Assessment of sparse-based inpainting for retinal vessel removal

    Some important eye diseases, like macular degeneration or diabetic retinopathy, can induce changes visible on the retina, for example as lesions. Segmentation of lesions or extraction of textural features from fundus images are possible steps towards automatic detection of such diseases, which could facilitate screening as well as provide support for clinicians. For the task of detecting significant features, retinal blood vessels are considered interference on the retinal images. If these blood vessel structures could be suppressed, it might lead to a more accurate segmentation of retinal lesions as well as a better extraction of textural features to be used for pathology detection. This work proposes the use of sparse representations and dictionary learning techniques for retinal vessel inpainting. The performance of the algorithm is tested on greyscale and RGB images from the DRIVE and STARE public databases, employing different neighbourhoods and sparseness factors. Moreover, a comparison with the most common inpainting family, diffusion-based methods, is carried out. For this purpose, two different ways of assessing the quality of the inpainting are presented and used to evaluate the results of non-artificial inpainting, i.e. where a reference image does not exist. The results suggest that sparse-based inpainting performs very well for retinal blood vessel removal, which will be useful for the future detection and classification of eye diseases. (C) 2017 Elsevier B.V. All rights reserved.

    This work was supported by the NILS Science and Sustainability Programme (014-ABEL-IM-2013) and by the Ministerio de Economia y Competitividad of Spain, Project ACRIMA (TIN2013-46751-R). The work of Adrian Colomer has been supported by the Spanish Government under FPI Grant BES-2014-067889.

    Colomer, A.; Naranjo Ornedo, V.; Engan, K.; Skretting, K. (2017). Assessment of sparse-based inpainting for retinal vessel removal. Signal Processing: Image Communication, 59:73-82. https://doi.org/10.1016/j.image.2017.03.018
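The core idea above — code each image patch sparsely over a dictionary using only its non-vessel pixels, then synthesise the masked vessel pixels from that same sparse code — can be sketched briefly. This is a minimal illustration, not the paper's implementation: it uses an analytic overcomplete DCT-like dictionary rather than a learned one, together with a small hand-rolled Orthogonal Matching Pursuit; `dct_dictionary`, `omp`, and `inpaint_patch` are hypothetical helper names.

```python
import numpy as np

def dct_dictionary(patch_size=8, atoms_per_dim=11):
    # Overcomplete 2-D DCT-like dictionary: every atom is the outer product
    # of two normalised 1-D cosine vectors, flattened to a column.
    n, k = patch_size, atoms_per_dim
    D1 = np.zeros((n, k))
    for j in range(k):
        v = np.cos(np.arange(n) * j * np.pi / k)
        if j > 0:
            v = v - v.mean()
        D1[:, j] = v / np.linalg.norm(v)
    return np.kron(D1, D1)  # shape (patch_size**2, atoms_per_dim**2)

def omp(A, y, n_nonzero, tol=1e-10):
    # Orthogonal Matching Pursuit: greedily pick the atom most correlated
    # with the residual, then refit all picked atoms by least squares.
    residual, idx, coef = y.astype(float).copy(), [], np.zeros(0)
    for _ in range(n_nonzero):
        scores = np.abs(A.T @ residual)
        scores[idx] = -1.0  # never re-select an atom
        idx.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
        if np.linalg.norm(residual) <= tol * (np.linalg.norm(y) + 1e-30):
            break
    code = np.zeros(A.shape[1])
    code[idx] = coef
    return code

def inpaint_patch(patch, mask, D, n_nonzero=4):
    # Fit the sparse code on the known (mask == False) pixels only, then
    # synthesise the vessel (mask == True) pixels from that same code.
    known = ~mask.ravel()
    code = omp(D[known], patch.ravel()[known], n_nonzero)
    filled = patch.ravel().astype(float).copy()
    filled[~known] = (D @ code)[~known]
    return filled.reshape(patch.shape)
```

In the paper's setting a learned dictionary would replace the analytic one; the inpainting step itself, which treats the vessel pixels as missing data, is unchanged.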

    Landscape- and field-scale control of spatial variation of soil properties in Mediterranean montane meadows

    The figures contained in this document are located at the end of it.

    Soil properties of terrestrial ecosystems are controlled by a variety of factors that operate at different scales. We tested the role of abiotic and biotic factors that potentially influence spatial gradients of total ion content, acidity, carbon, total nitrogen, and total phosphorus in topsoil. We studied a network of Mediterranean montane meadows that spans a 2000-m altitudinal gradient. The analyzed factors were grouped into two spatial scales: a landscape scale (climate and landform) and a field scale (topography, soil texture, soil moisture, and plant community composition). Total ion content and acidity are the major, independent trends of variation in soil geochemistry. Soil acidity, carbon, and nitrogen increased along the altitudinal gradient, whereas total ion content and phosphorus showed no relationship with elevation. Climate had no direct influence on the analyzed gradients; all effects of climate were indirect, through plant community composition and/or soil moisture. The results point to three types of models that explain the gradients of soil chemical composition: (1) a predominantly biotic control of carbon and nitrogen, (2) a predominantly abiotic control of acidity, and (3) a combined biotic and abiotic control of total ion content. No direct or indirect effects explained the gradient of phosphorus. In our study region (central Spain), climate is predicted to become more arid and soils will lose moisture. According to our models, this will result in less acidic and less fertile soils, and any change in plant community composition will modify the gradients of soil carbon, nitrogen, total ion content, and acidity.

    A Novel Self-Learning Framework for Bladder Cancer Grading Using Histopathological Images

    In recent years, the incidence and mortality of bladder cancer have increased significantly. Two subtypes are currently distinguished by tumour growth: non-muscle-invasive (NMIBC) and muscle-invasive bladder cancer (MIBC). In this work, we focus on the MIBC subtype because it has the worst prognosis and can spread to adjacent organs. We present a self-learning framework to grade bladder cancer from histological images stained via immunohistochemical techniques. Specifically, we propose a novel Deep Convolutional Embedded Attention Clustering (DCEAC) model which allows classifying histological patches into different severity levels of the disease, according to the patterns established in the literature. The proposed DCEAC model follows a two-step fully unsupervised learning methodology to discern between non-tumour, mild and infiltrative patterns from high-resolution samples of 512x512 pixels. Our system outperforms previous clustering-based methods by including a convolutional attention module, which allows refining the features of the latent space before the classification stage. The proposed network exceeds state-of-the-art approaches by 2-3% across different metrics, achieving a final average accuracy of 0.9034 in a multi-class scenario. Furthermore, the reported class activation maps show that our model is able to learn by itself the same patterns that clinicians consider relevant, without requiring prior annotation steps. This represents a breakthrough in muscle-invasive bladder cancer grading, bridging the gap with respect to training the model on labelled data.
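The two-step idea described above — learn a latent embedding of the patches, then cluster that embedding into severity patterns — can be illustrated with a deliberately tiny stand-in. Here PCA plays the role of the convolutional attention autoencoder and plain k-means the role of the clustering stage, so this is only a sketch of embed-then-cluster, not the DCEAC model; `embed` and `kmeans` are illustrative names.

```python
import numpy as np

def embed(patches, dim=2):
    # Stand-in for the convolutional attention autoencoder: project the
    # flattened patches onto their top `dim` principal components.
    X = patches.reshape(len(patches), -1).astype(float)
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:dim].T

def kmeans(Z, k, iters=50, seed=0):
    # Plain k-means with farthest-first seeding; DCEAC instead refines the
    # embedding and the cluster assignments jointly.
    rng = np.random.default_rng(seed)
    centers = [Z[rng.integers(len(Z))]]
    for _ in range(k - 1):  # farthest-first: spread the initial centers out
        dists = np.min([np.linalg.norm(Z - c, axis=1) for c in centers], axis=0)
        centers.append(Z[int(np.argmax(dists))])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):  # Lloyd iterations: assign, then re-estimate
        d = np.linalg.norm(Z[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = Z[labels == j].mean(axis=0)
    return labels
```

The cluster indices are then mapped to severity levels (e.g. non-tumour, mild, infiltrative) by inspecting representative patches of each cluster, which is also how a fully unsupervised grading can be made clinically interpretable.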

    Self-learning for weakly supervised Gleason grading of local patterns

    © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

    Prostate cancer is one of the main diseases affecting men worldwide. The gold standard for its diagnosis and prognosis is the Gleason grading system. In this process, pathologists manually analyze prostate histology slides under the microscope, a highly time-consuming and subjective task. In recent years, computer-aided diagnosis (CAD) systems have emerged as a promising tool that could support pathologists in daily clinical practice. Nevertheless, these systems are usually trained using tedious and error-prone pixel-level annotations of Gleason grades in the tissue. To alleviate the need for manual pixel-wise labeling, just a handful of works have been presented in the literature. Furthermore, despite the promising results achieved on global scoring, the location of cancerous patterns in the tissue is only qualitatively addressed. These heatmaps of tumor regions, however, are crucial to the reliability of CAD systems, as they provide explainability for the system's output and give pathologists confidence that the model is focusing on medically relevant features. Motivated by this, we propose a novel weakly supervised deep-learning model, based on self-learning CNNs, that leverages only the global Gleason score of gigapixel whole-slide images during training to accurately perform both grading of patch-level patterns and biopsy-level scoring. To evaluate the performance of the proposed method, we perform extensive experiments on three different external datasets for patch-level Gleason grading, and on two different test sets for global Grade Group prediction. We empirically demonstrate that our approach outperforms its supervised counterpart on patch-level Gleason grading by a large margin, as well as state-of-the-art methods on global biopsy-level scoring. In particular, the proposed model brings an average improvement in the Cohen's quadratic kappa score of nearly 18% compared to full supervision for the patch-level Gleason grading task. This suggests that the absence of annotator bias in our approach, and the capability of using large weakly labeled datasets during training, lead to higher-performing and more robust models. Furthermore, raw features obtained from the patch-level classifier were shown to generalize better than previous approaches in the literature to the subjective global biopsy-level scoring.

    This work was supported by the Spanish Ministry of Economy and Competitiveness through Projects DPI2016-77869 and PID2019-105142RB-C21.

    Silva-Rodríguez, J.; Colomer, A.; Dolz, J.; Naranjo Ornedo, V. (2021). Self-learning for weakly supervised Gleason grading of local patterns. IEEE Journal of Biomedical and Health Informatics, 25(8):3094-3104. https://doi.org/10.1109/JBHI.2021.3061457
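Cohen's quadratic-weighted kappa, the agreement metric reported above, penalises disagreements by the squared distance between ordinal grades, which suits Gleason grading's ordered classes. A small self-contained sketch (the `quadratic_kappa` helper is illustrative, not code from the paper):

```python
import numpy as np

def quadratic_kappa(y_true, y_pred, n_classes):
    # Cohen's kappa with quadratic weights w[i, j] = (i - j)^2 / (n - 1)^2:
    # kappa = 1 - sum(w * O) / sum(w * E), where O is the observed confusion
    # matrix and E the expected one under independent marginals.
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    i, j = np.indices((n_classes, n_classes))
    w = (i - j) ** 2 / (n_classes - 1) ** 2
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (w * O).sum() / (w * E).sum()
```

For example, `quadratic_kappa([0, 1, 2, 2], [0, 1, 2, 1], 3)` evaluates to 0.8, while perfect agreement yields 1.0; scikit-learn's `cohen_kappa_score(..., weights='quadratic')` computes the same quantity.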